Meta-learning digitized-counterdiabatic quantum optimization


Abstract

The use of variational quantum algorithms for optimization tasks has emerged as a crucial application of current noisy intermediate-scale quantum computers. However, these algorithms face significant difficulties in finding a suitable ansatz and appropriate initial parameters. In this paper, we employ meta-learning using recurrent neural networks to address these issues for the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (QAOA). By combining meta-learning and counterdiabaticity, we find suitable variational parameters and reduce the number of optimization iterations required. We demonstrate the effectiveness of our approach by applying it to the MaxCut problem and the Sherrington–Kirkpatrick model. Our method offers a short-depth circuit with optimal parameters, thus improving the performance of state-of-the-art QAOA.
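To make the setting concrete, the following is a minimal statevector sketch of standard QAOA on a toy MaxCut instance (a triangle graph), not the paper's implementation: it omits the counterdiabatic terms, and a coarse grid search stands in for the RNN meta-learner that the paper uses to propose initial parameters. The instance, function names, and grid are illustrative assumptions.

```python
import numpy as np

# Toy MaxCut instance: triangle graph on 3 qubits (illustrative; not the
# Sherrington-Kirkpatrick instances studied in the paper).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
dim = 2 ** n

# Diagonal cost operator: C(z) = number of edges cut by bitstring z.
bits = np.array([[(i >> q) & 1 for q in range(n)] for i in range(dim)])
cost = np.array([sum(bits[i, a] != bits[i, b] for a, b in edges)
                 for i in range(dim)], dtype=float)

def rx(beta):
    # Single-qubit mixer rotation exp(-i * beta * X).
    return np.array([[np.cos(beta), -1j * np.sin(beta)],
                     [-1j * np.sin(beta), np.cos(beta)]])

def qaoa_energy(gammas, betas):
    """<C> after p layers of plain QAOA, simulated exactly on the statevector."""
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n start state
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * cost) * state          # cost layer (diagonal)
        mixer = np.array([[1.0]])
        for _ in range(n):
            mixer = np.kron(mixer, rx(beta))                # same RX on every qubit
        state = mixer @ state
    return float(np.sum(cost * np.abs(state) ** 2))

# The paper's RNN meta-learner would propose (gamma, beta) directly from
# earlier cost evaluations; a coarse grid search stands in for that step here.
grid = np.linspace(0.0, np.pi, 25)
best = max((qaoa_energy([g], [b]), g, b) for g in grid for b in grid)
```

With no layers applied (`gamma = beta = 0`), the uniform superposition gives the random-guessing value of 1.5 cut edges on this graph; the searched depth-1 parameters beat it, which is the gap a good initializer is meant to close quickly.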


Similar articles

Meta-learning and meta-optimization

Meta-learning is a method of improving the results of an algorithm by learning from meta-features which describe problem instances and from the results produced by various algorithms on these instances. In this project we tried to apply this idea, which has already proved useful in machine learning, to combinatorial optimization. We have developed a general software tool called SEAGE to extract meta-...


Meta-optimization of Quantum-Inspired Evolutionary Algorithm

In this paper, a meta-optimization algorithm based on Local Unimodal Sampling (LUS) has been applied to tune selected parameters of the Quantum-Inspired Evolutionary Algorithm for numerical optimization problems coded in real numbers. Tuning of the following two parameters has been considered: crossover rate and contraction factor. Performance landscapes of the algorithm's meta-fitness have been app...


Meta-learning approach to neural network optimization

Optimization of neural network topology, weights, and neuron transfer functions for a given data set and problem is not an easy task. In this article, we focus primarily on building an optimal feed-forward neural network classifier for i.i.d. data sets. We apply meta-learning principles to the neural network structure and function optimization. We show that diversity promotion, ensembling, self-organ...


Initializing Bayesian Hyperparameter Optimization via Meta-Learning

Model selection and hyperparameter optimization are crucial in applying machine learning to a novel dataset. Recently, a subcommunity of machine learning has focused on solving this problem with Sequential Model-based Bayesian Optimization (SMBO), demonstrating substantial successes in many applications. However, for computationally expensive algorithms the overhead of hyperparameter optimizatio...
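The warm-starting idea in the snippet above can be sketched in a few lines. Everything here is an illustrative assumption, not that paper's method: nearest-neighbor lookup in meta-feature space stands in for its initialization scheme, a toy quadratic replaces a real validation error, and local sampling around the incumbent replaces a proper model-based surrogate. The names `warm_start` and `smbo_lite` and the data are hypothetical.

```python
import numpy as np

# Hypothetical records of past runs: per-dataset meta-features and the best
# hyperparameter value found on that dataset (1-D for simplicity).
past_metafeatures = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
past_best_configs = np.array([0.2, 0.7, 0.45])

def warm_start(new_metafeatures, k=2):
    """Best configs of the k most similar past datasets (L2 distance in
    meta-feature space), used to seed a new optimization run."""
    d = np.linalg.norm(past_metafeatures - new_metafeatures, axis=1)
    return past_best_configs[np.argsort(d)[:k]]

def validation_error(x):
    # Toy objective for the new dataset, unknown to the optimizer.
    return (x - 0.4) ** 2

def smbo_lite(seeds, budget=20, seed=0):
    """Evaluate the warm-start seeds first, then sample near the incumbent
    (a stand-in for the model-based acquisition step)."""
    rng = np.random.default_rng(seed)
    evaluated = [(float(validation_error(s)), float(s)) for s in seeds]
    for _ in range(budget - len(evaluated)):
        incumbent = min(evaluated)[1]
        cand = float(np.clip(incumbent + rng.normal(scale=0.1), 0.0, 1.0))
        evaluated.append((float(validation_error(cand)), cand))
    return min(evaluated)

seeds = warm_start(np.array([0.15, 0.85]))
err, x = smbo_lite(seeds)
```

Because the most similar past dataset already contributes a near-optimal seed, the run starts from a low validation error instead of paying the full cold-start cost, which is the point of the meta-learned initialization.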


Scalable Meta-Learning for Bayesian Optimization

Bayesian optimization has become a standard technique for hyperparameter optimization, including data-intensive models such as deep neural networks that may take days or weeks to train. We consider the setting where previous optimization runs are available, and we wish to use their results to warm-start a new optimization run. We develop an ensemble model that can incorporate the results of pas...



Journal

Journal title: Quantum Science and Technology

Year: 2023

ISSN: 2364-9054, 2364-9062

DOI: https://doi.org/10.1088/2058-9565/ace54a